4 research outputs found

    Improvement of Alzheimer disease diagnosis accuracy using ensemble methods

    Nowadays, medical data is growing significantly, and we should take advantage of it. Machine learning can be applied through data mining processes such as classification, using either a single classification algorithm or several algorithms combined into an ensemble model. The objective of this work is to improve on the classification accuracy of previous results for diagnosing Alzheimer disease. The Decision Tree algorithm was combined with three types of ensemble methods: Boosting, Bagging, and Stacking. The clinical dataset from the Open Access Series of Imaging Studies (OASIS) was used in the experiments. The proposed approach outperformed the previous work: Random Forest (Bagging) achieved the highest accuracy among all algorithms at 90.69%, while Stacking had the lowest at 79.07%. All the results reported in this paper are more accurate than those obtained before.
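The three ensemble strategies the abstract names can be sketched with scikit-learn. This is a hedged illustration on a synthetic stand-in dataset, not the paper's actual OASIS pipeline or preprocessing; the model choices (Random Forest for Bagging, AdaBoost for Boosting, a tree-plus-logistic-regression stack for Stacking) are assumptions consistent with the abstract's description.

```python
# Sketch: comparing Bagging, Boosting, and Stacking around a Decision Tree
# base learner. The synthetic data below stands in for the OASIS clinical
# dataset, which is not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              StackingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Bagging (Random Forest)": RandomForestClassifier(random_state=0),
    # AdaBoost's default base learner is a depth-1 decision tree
    "Boosting (AdaBoost)": AdaBoostClassifier(random_state=0),
    "Stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0))],
        final_estimator=LogisticRegression()),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, model.predict(X_te)):.3f}")
```

On real clinical data, the relative ranking of the three methods would depend on the dataset, as the paper's 90.69% vs. 79.07% spread illustrates.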

    Deep Learning Algorithms for Forecasting COVID-19 Cases in Saudi Arabia

    In the recent past, the COVID-19 epidemic has impeded global economic progress and, by extension, all of society. The pandemic spread rapidly, threatening both human lives and the economy. Because of the growing scale of COVID-19 cases, employing artificial intelligence for prediction during the pandemic is crucial. The major objective of this research paper is therefore to compare various deep learning forecasting algorithms, including the auto-regressive integrated moving average (ARIMA), long short-term memory (LSTM), and convolutional neural network (CNN) techniques, to forecast how COVID-19 would spread in Saudi Arabia in terms of the number of people infected, the number of deaths, and the number of recovered cases. Three time horizons were used for the COVID-19 predictions: short-term, medium-term, and long-term forecasting. Data pre-processing and feature extraction were performed as an integral part of the analysis. Six performance measures were applied to compare the efficacy of the developed models. The LSTM and CNN algorithms showed superior predictive precision, with errors of less than 5% measured on the available real data sets. The best model for predicting confirmed death cases is LSTM, which has better RMSE and R2 values, although CNN performs comparably. LSTM unexpectedly performed badly when predicting recovered cases, with RMSE and R2 values of 641.3 and 0.313, respectively. This work helps decision-makers and health authorities reasonably evaluate the status of the pandemic in the country and act accordingly.
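Two of the performance measures the abstract reports, RMSE and R2, can be sketched as follows. The toy series below is illustrative only; the paper's six actual measures, trained models, and COVID-19 case data are not reproduced here.

```python
# Sketch: root mean squared error (RMSE) and the coefficient of
# determination (R^2), the two measures the abstract quotes for the
# LSTM results. Values here are invented toy data.
import numpy as np

def rmse(actual, predicted):
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def r2(actual, predicted):
    ss_res = np.sum((actual - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)  # total sum of squares
    return float(1.0 - ss_res / ss_tot)

actual = np.array([100., 120., 150., 190., 240.])
predicted = np.array([98., 125., 148., 185., 250.])
print(f"RMSE = {rmse(actual, predicted):.2f}, R2 = {r2(actual, predicted):.3f}")
```

A low R2 with a large RMSE, like the 0.313 and 641.3 reported for the recovered cases, indicates the forecast explains little of the variance in the series.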

    Optimization of Fog Computing Efficiency by Decreasing the Latency Level in the Medical Environment

    Fog computing is the latest technique in today's cloud computing environment. One of the most important parameters affecting fog computing performance is latency. Cloud computing has been very effective, but its efficiency with respect to this parameter needs improvement. With the advent of fog technology, the big cloud is divided into many small cloudlets, where each cloudlet is a mobility-enhanced, small-scale data center: a new architectural element that extends today's cloud computing infrastructure. These cloudlets combine to form fog computing, whose main advantage is improving the characteristics listed above. This paper aims to improve the efficiency of the cloud computing environment by developing a simulation system that tests the best structure before applying it in a real environment. The system improves efficiency based on the latency parameter, providing researchers and interested parties with a clear vision for improving the work in a simulation environment and achieving the same methodology in real environments. The simulation system was developed to reduce the delay of tasks and thereby increase the performance rate. The results of the simulation system are promising and can be applied in the medical environment.
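The kind of latency comparison such a simulation system makes can be sketched in a few lines: tasks served by one distant cloud versus the nearest of several nearby cloudlets. The delay values and task workload below are illustrative assumptions, not the paper's actual simulator or measurements.

```python
# Sketch: average task latency under a cloud-only deployment vs. a fog
# deployment that routes each task to the lowest-delay cloudlet.
# All delay figures are assumed for illustration.
import random

random.seed(0)

CLOUD_DELAY_MS = 80.0                    # assumed round trip to a distant cloud
CLOUDLET_DELAYS_MS = [5.0, 12.0, 20.0]   # assumed nearby fog cloudlets

def cloud_latency(task_ms):
    return CLOUD_DELAY_MS + task_ms

def fog_latency(task_ms):
    # route the task to the nearest (lowest-delay) cloudlet
    return min(CLOUDLET_DELAYS_MS) + task_ms

tasks = [random.uniform(1.0, 10.0) for _ in range(1000)]  # task service times
avg_cloud = sum(cloud_latency(t) for t in tasks) / len(tasks)
avg_fog = sum(fog_latency(t) for t in tasks) / len(tasks)
print(f"cloud: {avg_cloud:.1f} ms, fog: {avg_fog:.1f} ms")
```

Because the network delay dominates short tasks, moving computation to nearby cloudlets cuts average latency sharply, which is the effect the paper's simulation is designed to quantify for the medical environment.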
